ResearchTrend.AI
Trainless Model Performance Estimation for Neural Architecture Search

10 March 2021
Ekaterina Gracheva
Links: arXiv (abs) · PDF · HTML · GitHub
Main: 14 pages · Bibliography: 3 pages · Appendix: 3 pages · 11 figures · 3 tables
Abstract

Neural architecture search has become an indispensable part of the deep learning field. Modern methods can find the best-performing architecture for a task, or build a network from scratch, but they usually require a tremendous amount of training. In this paper we present a simple method that discovers a suitable architecture for a task based on its untrained performance. As the metric score we introduce the relative standard deviation of the untrained accuracy, i.e. the standard deviation divided by the mean. Statistics for each neural architecture are calculated over multiple initialisations with different seeds on a single batch of data. The architecture with the lowest metric score achieves an average accuracy of 91.90 ± 2.27, 64.08 ± 5.63 and 38.76 ± 6.62 on CIFAR-10, CIFAR-100 and a downscaled version of ImageNet, respectively. The results show that a good architecture should be stable against initialisation before training. The procedure takes about 190 s for CIFAR and 133.9 s for ImageNet, on a batch of 256 images and 100 initialisations.
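The selection rule described in the abstract can be sketched in a few lines: for each candidate architecture, collect the untrained accuracies over several seeds, compute the relative standard deviation (std / mean), and keep the architecture with the lowest value. The architecture names and accuracy samples below are hypothetical illustrations, not data from the paper.

```python
import statistics

def trainless_score(accuracies):
    """Relative standard deviation (std / mean) of untrained accuracies.

    `accuracies` holds one architecture's accuracy, evaluated untrained
    on a single batch of data, once per random initialisation seed.
    A lower score means the architecture is more stable against
    initialisation, which the paper links to better final accuracy.
    """
    return statistics.stdev(accuracies) / statistics.mean(accuracies)

def select_architecture(candidates):
    """Pick the architecture whose untrained accuracies give the
    lowest relative standard deviation.

    `candidates` maps an architecture name to its list of untrained
    accuracies over different seeds.
    """
    return min(candidates, key=lambda name: trainless_score(candidates[name]))

# Hypothetical untrained-accuracy samples for three architectures:
candidates = {
    "arch_a": [0.11, 0.10, 0.12, 0.11],  # stable -> low score
    "arch_b": [0.05, 0.20, 0.10, 0.02],  # unstable -> high score
    "arch_c": [0.09, 0.14, 0.10, 0.13],
}
print(select_architecture(candidates))  # -> arch_a
```

In practice the accuracies would come from running each untrained network on one batch of 256 images under 100 different seeds, as the abstract describes; the ranking itself is the cheap part.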
