ResearchTrend.AI
arXiv:2101.07406
Initialization Using Perlin Noise for Training Networks with a Limited Amount of Data

19 January 2021
Nakamasa Inoue, Eisuke Yamagata, Hirokatsu Kataoka

Papers citing "Initialization Using Perlin Noise for Training Networks with a Limited Amount of Data"

3 / 3 papers shown
Scaling Backwards: Minimal Synthetic Pre-training?
Ryo Nakamura, Ryu Tadokoro, Ryosuke Yamada, Tim Puhlfürß, Iro Laina, Christian Rupprecht, Walid Maalej, Rio Yokota, Hirokatsu Kataoka
01 Aug 2024
Deep Learning of Crystalline Defects from TEM images: A Solution for the Problem of "Never Enough Training Data"
Kishan Govind, D. Oliveros, A. Dlouhý, M. Legros, Stefan Sandfeld
12 Jul 2023
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
16 Nov 2016