ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth

arXiv:2010.15327 · 29 October 2020
Thao Nguyen, M. Raghu, Simon Kornblith
Topic: OOD

Papers citing "Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth"

50 / 64 papers shown
LoRASuite: Efficient LoRA Adaptation Across Large Language Model Upgrades
Yanan Li, Fanxu Meng, Muhan Zhang, Shiai Zhu, Shangguang Wang, Mengwei Xu · MoMe · 17 May 2025

DiGIT: Multi-Dilated Gated Encoder and Central-Adjacent Region Integrated Decoder for Temporal Action Detection Transformer
Ho-Joong Kim, Y. E. Lee, Jung-Ho Hong, Seong-Whan Lee · 09 May 2025

Advancing 3D Medical Image Segmentation: Unleashing the Potential of Planarian Neural Networks in Artificial Intelligence
Ziyuan Huang, Kevin Huggins, Srikar Bellur · 3DV · 07 May 2025

Representational Similarity via Interpretable Visual Concepts
Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona · DRL · 19 Mar 2025

A super-resolution reconstruction method for lightweight building images based on an expanding feature modulation network
Yi Zhang, Wenye Zhou, Ruonan Lin · SupR · 17 Mar 2025

Measuring Error Alignment for Decision-Making Systems
Binxia Xu, Antonis Bikakis, Daniel Onah, A. Vlachidis, Luke Dickens · 03 Jan 2025

Intrinsic Dimension Correlation: uncovering nonlinear connections in multimodal representations
Lorenzo Basile, Santiago Acevedo, Luca Bortolussi, Fabio Anselmi, Alex Rodriguez · 22 Jun 2024

LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving
Sicen Guo, Zhiyuan Wu, Qijun Chen, Ioannis Pitas, Rui Fan · 13 Mar 2024

What Do Self-Supervised Speech and Speaker Models Learn? New Findings From a Cross Model Layer-Wise Analysis
Takanori Ashihara, Marc Delcroix, Takafumi Moriya, Kohei Matsuura, Taichi Asami, Yusuke Ijima · SSL · 31 Jan 2024

ExpPoint-MAE: Better interpretability and performance for self-supervised point cloud transformers
Ioannis Romanelis, Vlassis Fotis, Konstantinos Moustakas, Adrian Munteanu · ViT, 3DPC · 19 Jun 2023

Efficient Mixed Transformer for Single Image Super-Resolution
Ling Zheng, Jinchen Zhu, Jinpeng Shi, Shizhuang Weng · 19 May 2023

Multi-Path Transformer is Better: A Case Study on Neural Machine Translation
Ye Lin, Shuhan Zhou, Yanyang Li, Anxiang Ma, Tong Xiao, Jingbo Zhu · 10 May 2023

Similarity of Neural Network Models: A Survey of Functional and Representational Measures
Max Klabunde, Tobias Schumacher, M. Strohmaier, Florian Lemmerich · 10 May 2023

Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models
D. Honegger, Konstantin Schurholt, Damian Borth · 26 Apr 2023

Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient
Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda · 25 Apr 2023

Revisiting the Evaluation of Image Synthesis with GANs
Mengping Yang, Ceyuan Yang, Yichi Zhang, Qingyan Bai, Yujun Shen, Bo Dai · EGVM · 04 Apr 2023

The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment
Jared Fernandez, Jacob Kahn, Clara Na, Yonatan Bisk, Emma Strubell · FedML · 13 Feb 2023

FlexiViT: One Model for All Patch Sizes
Lucas Beyer, Pavel Izmailov, Alexander Kolesnikov, Mathilde Caron, Simon Kornblith, Xiaohua Zhai, Matthias Minderer, Michael Tschannen, Ibrahim M. Alabdulmohsin, Filip Pavetić · VLM · 15 Dec 2022

On the effectiveness of partial variance reduction in federated learning with heterogeneous data
Bo-wen Li, Mikkel N. Schmidt, T. S. Alstrøm, Sebastian U. Stich · FedML · 05 Dec 2022

ModelDiff: A Framework for Comparing Learning Algorithms
Harshay Shah, Sung Min Park, Andrew Ilyas, A. Madry · SyDa · 22 Nov 2022

On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning
S. Takagi · OffRL · 17 Nov 2022

Boosting vision transformers for image retrieval
Chull Hwan Song, Jooyoung Yoon, Shunghyun Choi, Yannis Avrithis · ViT · 21 Oct 2022

Packed-Ensembles for Efficient Uncertainty Estimation
Olivier Laurent, Adrien Lafage, Enzo Tartaglione, Geoffrey Daniel, Jean-Marc Martinez, Andrei Bursuc, Gianni Franchi · OODD · 17 Oct 2022

When does deep learning fail and how to tackle it? A critical analysis on polymer sequence-property surrogate models
Himanshu, T. Patra · AI4CE · 12 Oct 2022

Boosting Graph Neural Networks via Adaptive Knowledge Distillation
Zhichun Guo, Chunhui Zhang, Yujie Fan, Yijun Tian, Chuxu Zhang, Nitesh V. Chawla · 12 Oct 2022

The Dynamic of Consensus in Deep Networks and the Identification of Noisy Labels
Daniel Shwartz, Uri Stern, D. Weinshall · NoLa · 02 Oct 2022

Model Zoos: A Dataset of Diverse Populations of Neural Network Models
Konstantin Schurholt, Diyar Taskiran, Boris Knyazev, Xavier Giró-i-Nieto, Damian Borth · 29 Sep 2022

MAC: A Meta-Learning Approach for Feature Learning and Recombination
S. Tiwari, M. Gogoi, S. Verma, K. P. Singh · CLL · 20 Sep 2022

Git Re-Basin: Merging Models modulo Permutation Symmetries
Samuel K. Ainsworth, J. Hayase, S. Srinivasa · MoMe · 11 Sep 2022

Curbing Task Interference using Representation Similarity-Guided Multi-Task Feature Sharing
Naresh Gurulingan, Elahe Arani, Bahram Zonooz · 19 Aug 2022

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Raukur, A. Ho, Stephen Casper, Dylan Hadfield-Menell · AAML, AI4CE · 27 Jul 2022

The Neural Race Reduction: Dynamics of Abstraction in Gated Networks
Andrew M. Saxe, Shagun Sodhani, Sam Lewallen · AI4CE · 21 Jul 2022

Large-scale Robustness Analysis of Video Action Recognition Models
Madeline Chantry Schiappa, Naman Biyani, Prudvi Kamtam, Shruti Vyas, Hamid Palangi, Vibhav Vineet, Yogesh S Rawat · AAML · 04 Jul 2022

Neural Networks as Paths through the Space of Representations
Richard D. Lange, Devin Kwok, Jordan K Matelsky, Xinyue Wang, David Rolnick, Konrad Paul Kording · 22 Jun 2022

DORA: Exploring Outlier Representations in Deep Neural Networks
Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Muller, Marina M.-C. Höhne · 09 Jun 2022

What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective
Rhea Chowers, Yair Weiss · 06 Jun 2022

A Closer Look at Self-Supervised Lightweight Vision Transformers
Shaoru Wang, Jin Gao, Zeming Li, Jian Sun, Weiming Hu · ViT · 28 May 2022

On the Symmetries of Deep Learning Models and their Internal Representations
Charles Godfrey, Davis Brown, Tegan H. Emerson, Henry Kvinge · 27 May 2022

Embedding Principle in Depth for the Loss Landscape Analysis of Deep Neural Networks
Zhiwei Bai, Tao Luo, Z. Xu, Yaoyu Zhang · 26 May 2022

When does dough become a bagel? Analyzing the remaining mistakes on ImageNet
Vijay Vasudevan, Benjamin Caine, Raphael Gontijo-Lopes, Sara Fridovich-Keil, Rebecca Roelofs · VLM, UQCV · 09 May 2022

Explaining the Effectiveness of Multi-Task Learning for Efficient Knowledge Extraction from Spine MRI Reports
Arijit Sehanobish, M. Sandora, Nabila Abraham, Jayashri Pawar, Danielle Torres, Anasuya Das, M. Becker, Richard Herzog, Benjamin Odry, Ron Vianu · 06 May 2022

Machine Learning and Deep Learning -- A review for Ecologists
Maximilian Pichler, F. Hartig · 11 Apr 2022

Universal Representations: A Unified Look at Multiple Task and Domain Learning
Wei-Hong Li, Xialei Liu, Hakan Bilen · SSL, OOD · 06 Apr 2022

Online Convolutional Re-parameterization
Mu Hu, Junyi Feng, Jiashen Hua, Baisheng Lai, Jianqiang Huang, Xiaojin Gong, Xiansheng Hua · 02 Apr 2022

What Makes Transfer Learning Work For Medical Images: Feature Reuse & Other Factors
Christos Matsoukas, Johan Fredin Haslum, Moein Sorkhei, Magnus P Soderberg, Kevin Smith · VLM, OOD, MedIm · 02 Mar 2022

On the Origins of the Block Structure Phenomenon in Neural Network Representations
Thao Nguyen, M. Raghu, Simon Kornblith · 15 Feb 2022

How Do Vision Transformers Work?
Namuk Park, Songkuk Kim · ViT · 14 Feb 2022

Investigating Power laws in Deep Representation Learning
Arna Ghosh, Arnab Kumar Mondal, Kumar Krishna Agrawal, Blake A. Richards · SSL, OOD · 11 Feb 2022

Deconfounded Representation Similarity for Comparison of Neural Networks
Tianyu Cui, Yogesh Kumar, Pekka Marttinen, Samuel Kaski · CML · 31 Jan 2022

Representation Topology Divergence: A Method for Comparing Neural Network Representations
S. Barannikov, I. Trofimov, Nikita Balabin, Evgeny Burnaev · 3DPC · 31 Dec 2021