ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Information in Infinite Ensembles of Infinitely-Wide Neural Networks
Ravid Shwartz-Ziv, Alexander A. Alemi
arXiv:1911.09189, 20 November 2019

Papers citing "Information in Infinite Ensembles of Infinitely-Wide Neural Networks"

8 papers
Reverse Engineering Self-Supervised Learning
Ido Ben-Shaul, Ravid Shwartz-Ziv, Tomer Galanti, S. Dekel, Yann LeCun
24 May 2023

Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling
Jiri Hron, Roman Novak, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
15 Jun 2022

Machine Learning and Deep Learning -- A review for Ecologists
Maximilian Pichler, F. Hartig
11 Apr 2022

Estimating informativeness of samples with Smooth Unique Information
Hrayr Harutyunyan, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
17 Jan 2021

Whitening and second order optimization both make information in the dataset unusable during training, and can reduce or prevent generalization
Neha S. Wadia, Daniel Duckworth, S. Schoenholz, Ethan Dyer, Jascha Narain Sohl-Dickstein
17 Aug 2020

On Information Plane Analyses of Neural Network Classifiers -- A Review
Bernhard C. Geiger
21 Mar 2020

Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations
Aditya Golatkar, Alessandro Achille, Stefano Soatto
05 Mar 2020

On the infinite width limit of neural networks with a standard parameterization
Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee
21 Jan 2020