Measuring the Intrinsic Dimension of Objective Landscapes

24 April 2018
Chunyuan Li
Heerad Farkhoor
Rosanne Liu
J. Yosinski

Papers citing "Measuring the Intrinsic Dimension of Objective Landscapes"

50 / 106 papers shown
Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso
Ji-Ung Lee
Tianchu Ji
Betty van Aken
Qingqing Cao
...
Emma Strubell
Niranjan Balasubramanian
Leon Derczynski
Iryna Gurevych
Roy Schwartz
33
109
0
31 Aug 2022
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity
Martin Gubri
Maxime Cordy
Mike Papadakis
Yves Le Traon
Koushik Sen
AAML
35
51
0
26 Jul 2022
Towards Semantic Communication Protocols: A Probabilistic Logic Perspective
Sejin Seo
Jihong Park
Seung-Woo Ko
Jinho Choi
M. Bennis
Seong-Lyun Kim
30
22
0
08 Jul 2022
When Does Differentially Private Learning Not Suffer in High Dimensions?
Xuechen Li
Daogao Liu
Tatsunori Hashimoto
Huseyin A. Inan
Janardhan Kulkarni
Y. Lee
Abhradeep Thakurta
36
58
0
01 Jul 2022
LIDL: Local Intrinsic Dimension Estimation Using Approximate Likelihood
Piotr Tempczyk
Rafał Michaluk
Łukasz Garncarek
Przemysław Spurek
Jacek Tabor
Adam Goliński
35
26
0
29 Jun 2022
Few-Shot Learning by Dimensionality Reduction in Gradient Space
M. Gauch
M. Beck
Thomas Adler
D. Kotsur
Stefan Fiel
...
Markus Holzleitner
Werner Zellinger
D. Klotz
Sepp Hochreiter
Sebastian Lehner
48
9
0
07 Jun 2022
PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
Rabeeh Karimi Mahabadi
Luke Zettlemoyer
James Henderson
Marzieh Saeidi
Lambert Mathias
Ves Stoyanov
Majid Yazdani
VLM
34
70
0
03 Apr 2022
APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction
Bencheng Yan
Pengjie Wang
Kai Zhang
Feng Li
Hongbo Deng
Jian Xu
Bo Zheng
27
20
0
30 Mar 2022
Parameter-efficient Model Adaptation for Vision Transformers
Xuehai He
Chunyuan Li
Pengchuan Zhang
Jianwei Yang
Junfeng Fang
30
84
0
29 Mar 2022
Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding
Yujia Qin
Guang Yang
Fu Wei
Zonghan Yang
...
Jianfei Chen
Yang Liu
Jie Tang
Juan Li
Maosong Sun
34
197
0
14 Mar 2022
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks
Siddhartha Datta
N. Shadbolt
AAML
32
6
0
07 Mar 2022
Transferability in Deep Learning: A Survey
Junguang Jiang
Yang Shu
Jianmin Wang
Mingsheng Long
OOD
34
101
0
15 Jan 2022
Model Stability with Continuous Data Updates
Huiting Liu
Avinesh P.V.S
Siddharth Patwardhan
Peter Grasch
Sachin Agarwal
32
16
0
14 Jan 2022
Black-Box Tuning for Language-Model-as-a-Service
Tianxiang Sun
Yunfan Shao
Hong Qian
Xuanjing Huang
Xipeng Qiu
VLM
50
256
0
10 Jan 2022
Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Tolga Birdal
Aaron Lou
Leonidas J. Guibas
Umut Şimşekli
32
61
0
25 Nov 2021
Subspace Adversarial Training
Tao Li
Yingwen Wu
Sizhe Chen
Kun Fang
Xiaolin Huang
AAML
OOD
44
56
0
24 Nov 2021
Differentially Private Fine-tuning of Language Models
Da Yu
Saurabh Naik
A. Backurs
Sivakanth Gopi
Huseyin A. Inan
...
Y. Lee
Andre Manoel
Lukas Wutschitz
Sergey Yekhanin
Huishuai Zhang
134
351
0
13 Oct 2021
Robust fine-tuning of zero-shot models
Mitchell Wortsman
Gabriel Ilharco
Jong Wook Kim
Mike Li
Simon Kornblith
...
Raphael Gontijo-Lopes
Hannaneh Hajishirzi
Ali Farhadi
Hongseok Namkoong
Ludwig Schmidt
VLM
66
695
0
04 Sep 2021
EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
S. Vargaftik
Ran Ben-Basat
Amit Portnoy
Gal Mendelson
Y. Ben-Itzhak
Michael Mitzenmacher
FedML
46
46
0
19 Aug 2021
Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II
Yossi Arjevani
M. Field
28
18
0
21 Jul 2021
What can linear interpolation of neural network loss landscapes tell us?
Tiffany J. Vlaar
Jonathan Frankle
MoMe
30
27
0
30 Jun 2021
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
Elad Ben-Zaken
Shauli Ravfogel
Yoav Goldberg
97
1,157
0
18 Jun 2021
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Rabeeh Karimi Mahabadi
James Henderson
Sebastian Ruder
MoE
67
469
0
08 Jun 2021
Privately Learning Subspaces
Vikrant Singhal
Thomas Steinke
27
20
0
28 May 2021
Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
James Lucas
Juhan Bae
Michael Ruogu Zhang
Stanislav Fort
R. Zemel
Roger C. Grosse
MoMe
172
28
0
22 Apr 2021
Multiple Instance Captioning: Learning Representations from Histopathology Textbooks and Articles
Jevgenij Gamper
Nasir M. Rajpoot
27
62
0
08 Mar 2021
Learning Neural Network Subspaces
Mitchell Wortsman
Maxwell Horton
Carlos Guestrin
Ali Farhadi
Mohammad Rastegari
UQCV
27
85
0
20 Feb 2021
Policy Manifold Search for Improving Diversity-based Neuroevolution
Nemanja Rakićević
Antoine Cully
Petar Kormushev
29
0
0
15 Dec 2020
Improving Neural Network Training in Low Dimensional Random Bases
Frithjof Gressmann
Zach Eaton-Rosen
Carlo Luschi
30
28
0
09 Nov 2020
Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training
Dingqing Yang
Amin Ghasemazar
X. Ren
Maximilian Golub
G. Lemieux
Mieszko Lis
22
48
0
23 Sep 2020
It's Hard for Neural Networks To Learn the Game of Life
Jacob Mitchell Springer
Garrett Kenyon
27
21
0
03 Sep 2020
Graph Structure of Neural Networks
Jiaxuan You
J. Leskovec
Kaiming He
Saining Xie
GNN
27
137
0
13 Jul 2020
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Dmitry Lepikhin
HyoukJoong Lee
Yuanzhong Xu
Dehao Chen
Orhan Firat
Yanping Huang
M. Krikun
Noam M. Shazeer
Zhehuai Chen
MoE
43
1,116
0
30 Jun 2020
Measuring Dataset Granularity
Huayu Chen
Zeqi Gu
D. Mahajan
Laurens van der Maaten
Serge J. Belongie
Ser-Nam Lim
29
13
0
21 Dec 2019
Deep Ensembles: A Loss Landscape Perspective
Stanislav Fort
Huiyi Hu
Balaji Lakshminarayanan
OOD
UQCV
29
619
0
05 Dec 2019
Emergent properties of the local geometry of neural loss landscapes
Stanislav Fort
Surya Ganguli
14
50
0
14 Oct 2019
Soft-Label Dataset Distillation and Text Dataset Distillation
Ilia Sucholutsky
Matthias Schonlau
DD
33
132
0
06 Oct 2019
A deep-learning-based surrogate model for data assimilation in dynamic subsurface flow problems
Meng Tang
Yimin Liu
L. Durlofsky
AI4CE
32
257
0
16 Aug 2019
Multi-task Self-Supervised Learning for Human Activity Detection
Aaqib Saeed
T. Ozcelebi
J. Lukkien
SSL
23
270
0
27 Jul 2019
Subspace Inference for Bayesian Deep Learning
Pavel Izmailov
Wesley J. Maddox
Polina Kirichenko
T. Garipov
Dmitry Vetrov
A. Wilson
UQCV
BDL
38
143
0
17 Jul 2019
What does it mean to understand a neural network?
Timothy Lillicrap
Konrad Paul Kording
18
42
0
15 Jul 2019
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks
Ziang Yan
Yiwen Guo
Changshui Zhang
AAML
30
110
0
11 Jun 2019
Weight Agnostic Neural Networks
Adam Gaier
David R Ha
OOD
38
239
0
11 Jun 2019
Spectral Metric for Dataset Complexity Assessment
Frederic Branchaud-Charron
Andrew Achkar
Pierre-Marc Jodoin
23
28
0
17 May 2019
On Scalable and Efficient Computation of Large Scale Optimal Transport
Yujia Xie
Minshuo Chen
Haoming Jiang
T. Zhao
H. Zha
OT
16
42
0
01 May 2019
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Hesham Mostafa
Xin Wang
37
307
0
15 Feb 2019
An Investigation into Neural Net Optimization via Hessian Eigenvalue Density
Behrooz Ghorbani
Shankar Krishnan
Ying Xiao
ODL
18
317
0
29 Jan 2019
Stiffness: A New Perspective on Generalization in Neural Networks
Stanislav Fort
Pawel Krzysztof Nowak
Stanislaw Jastrzebski
S. Narayanan
24
94
0
28 Jan 2019
An Empirical Study of Example Forgetting during Deep Neural Network Learning
Mariya Toneva
Alessandro Sordoni
Rémi Tachet des Combes
Adam Trischler
Yoshua Bengio
Geoffrey J. Gordon
46
715
0
12 Dec 2018
Agent Embeddings: A Latent Representation for Pole-Balancing Networks
Oscar Chang
Robert Kwiatkowski
Siyuan Chen
Hod Lipson
27
6
0
12 Nov 2018