ResearchTrend.AI

Where is the Information in a Deep Neural Network?
Alessandro Achille, Giovanni Paolini, Stefano Soatto
arXiv:1905.12213 · 29 May 2019

Papers citing "Where is the Information in a Deep Neural Network?" (29 papers shown)
  1. Quantifying Privacy Leakage in Split Inference via Fisher-Approximated Shannon Information Analysis
     Ruijun Deng, Zhihui Lu, Qiang Duan · FedML · 14 Apr 2025
  2. FisherTune: Fisher-Guided Robust Tuning of Vision Foundation Models for Domain Generalized Segmentation
     Dong Zhao, Jinlong Li, Shuang Wang, Mengyao Wu, Qi Zang, N. Sebe, Zhun Zhong · 23 Mar 2025
  3. Taming AI Bots: Controllability of Neural States in Large Language Models
     Stefano Soatto, Paulo Tabuada, Pratik Chaudhari, Tianwei Liu · LLMAG, LM&Ro · 29 May 2023
  4. AI Model Disgorgement: Methods and Choices
     Alessandro Achille, Michael Kearns, Carson Klingenberg, Stefano Soatto · MU · 07 Apr 2023
  5. Inversion dynamics of class manifolds in deep learning reveals tradeoffs underlying generalisation
     Simone Ciceri, Lorenzo Cassani, Matteo Osella, P. Rotondo, P. Pizzochero, M. Gherardi · 09 Mar 2023
  6. Two Facets of SDE Under an Information-Theoretic Lens: Generalization of SGD via Training Trajectories and via Terminal States
     Ziqiao Wang, Yongyi Mao · 19 Nov 2022
  7. Towards Semantic Communication Protocols: A Probabilistic Logic Perspective
     Sejin Seo, Jihong Park, Seung-Woo Ko, Jinho Choi, M. Bennis, Seong-Lyun Kim · 08 Jul 2022
  8. On Leave-One-Out Conditional Mutual Information For Generalization
     Mohamad Rida Rammal, Alessandro Achille, Aditya Golatkar, Suhas Diggavi, Stefano Soatto · VLM · 01 Jul 2022
  9. HEAM: High-Efficiency Approximate Multiplier Optimization for Deep Neural Networks
     Su Zheng, Zhen Li, Yao Lu, Jingbo Gao, Jide Zhang, Lingli Wang · 20 Jan 2022
  10. Training Neural Networks with Fixed Sparse Masks
     Yi-Lin Sung, Varun Nair, Colin Raffel · FedML · 18 Nov 2021
  11. Merging Models with Fisher-Weighted Averaging
     Michael Matena, Colin Raffel · FedML, MoMe · 18 Nov 2021
  12. Fast Yet Effective Machine Unlearning
     Ayush K Tarun, Vikram S Chundawat, Murari Mandal, Mohan S. Kankanhalli · MU · 17 Nov 2021
  13. Does the Data Induce Capacity Control in Deep Learning?
     Rubing Yang, Jialin Mao, Pratik Chaudhari · 27 Oct 2021
  14. Deep Neural Networks and Tabular Data: A Survey
     V. Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, Gjergji Kasneci · LMTD · 05 Oct 2021
  15. Estimating informativeness of samples with Smooth Unique Information
     Hrayr Harutyunyan, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto · 17 Jan 2021
  16. Mixed-Privacy Forgetting in Deep Networks
     Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, M. Polito, Stefano Soatto · CLL, MU · 24 Dec 2020
  17. Generalized Variational Continual Learning
     Noel Loo, S. Swaroop, Richard Turner · BDL, CLL · 24 Nov 2020
  18. Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting
     Zeke Xie, Fengxiang He, Shaopeng Fu, Issei Sato, Dacheng Tao, Masashi Sugiyama · 12 Nov 2020
  19. Whitening and second order optimization both make information in the dataset unusable during training, and can reduce or prevent generalization
     Neha S. Wadia, Daniel Duckworth, S. Schoenholz, Ethan Dyer, Jascha Narain Sohl-Dickstein · 17 Aug 2020
  20. Optimizing Information Loss Towards Robust Neural Networks
     Philip Sperl, Konstantin Böttinger · AAML · 07 Aug 2020
  21. Adversarial Training Reduces Information and Improves Transferability
     M. Terzi, Alessandro Achille, Marco Maggipinto, Gian Antonio Susto · AAML · 22 Jul 2020
  22. Backpropagated Gradient Representations for Anomaly Detection
     Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib · 18 Jul 2020
  23. Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations
     Aditya Golatkar, Alessandro Achille, Stefano Soatto · MU, OOD · 05 Mar 2020
  24. A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima
     Zeke Xie, Issei Sato, Masashi Sugiyama · ODL · 10 Feb 2020
  25. Machine Unlearning: Linear Filtration for Logit-based Classifiers
     Thomas Baumhauer, Pascal Schöttle, Matthias Zeppelzauer · MU · 07 Feb 2020
  26. Information in Infinite Ensembles of Infinitely-Wide Neural Networks
     Ravid Shwartz-Ziv, Alexander A. Alemi · 20 Nov 2019
  27. Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates
     Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, Daniel M. Roy · FedML · 06 Nov 2019
  28. A General Framework for Uncertainty Estimation in Deep Learning
     Antonio Loquercio, Mattia Segu, Davide Scaramuzza · UQCV, BDL, OOD · 16 Jul 2019
  29. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
     N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016