arXiv:2105.08054 · Cited By
Divide and Contrast: Self-supervised Learning from Uncurated Data (17 May 2021)
Yonglong Tian, Olivier J. Hénaff, Aaron van den Oord. [SSL]
Papers citing "Divide and Contrast: Self-supervised Learning from Uncurated Data" (32 papers shown)
SimMIL: A Universal Weakly Supervised Pre-Training Framework for Multi-Instance Learning in Whole Slide Pathology Images (10 May 2025)
Yicheng Song, Tiancheng Lin, Die Peng, Su Yang, Yi Xu. [MedIm]

Efficient Self-Supervised Learning for Earth Observation via Dynamic Dataset Curation (09 Apr 2025)
Thomas Kerdreux, A. Tuel, Quentin Febvre, A. Mouche, Bertrand Chapron.

A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning (10 Mar 2025)
Xin Wen, Bingchen Zhao, Yilun Chen, Jiangmiao Pang, Xiaojuan Qi. [LM&Ro]

ELIP: Enhanced Visual-Language Foundation Models for Image Retrieval (21 Feb 2025)
Guanqi Zhan, Yuanpei Liu, Kai Han, Weidi Xie, Andrew Zisserman. [VLM]

Self-Masking Networks for Unsupervised Adaptation (11 Sep 2024)
Alfonso Taboada Warmerdam, Mathilde Caron, Yuki M. Asano.

Predicting the Best of N Visual Trackers (22 Jul 2024)
B. Alawode, S. Javed, Arif Mahmood, Jiří Matas.

A Review on Discriminative Self-supervised Learning Methods in Computer Vision (08 May 2024)
Nikolaos Giakoumoglou, Tania Stathaki, Athanasios Gkelias. [SSL]

RudolfV: A Foundation Model by Pathologists for Pathologists (08 Jan 2024)
Jonas Dippel, Barbara Feulner, Tobias Winterhoff, Timo Milbich, Stephan Tietz, ..., David Horst, Lukas Ruff, Klaus-Robert Müller, Frederick Klauschen, Maximilian Alber.

DINOv2: Learning Robust Visual Features without Supervision (14 Apr 2023)
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Q. Vo, Marc Szafraniec, ..., Hervé Jégou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski. [VLM, CLIP, SSL]

A surprisingly simple technique to control the pretraining bias for better transfer: Expand or Narrow your representation (11 Apr 2023)
Florian Bordes, Samuel Lavoie, Randall Balestriero, Nicolas Ballas, Pascal Vincent. [SSL]

Learning Visual Representations via Language-Guided Sampling (23 Feb 2023)
Mohamed El Banani, Karan Desai, Justin Johnson. [SSL, VLM]

Learning Dense Object Descriptors from Multiple Views for Low-shot Category Generalization (28 Nov 2022)
Stefan Stojanov, Anh Thai, Zixuan Huang, James M. Rehg.

A simple, efficient and scalable contrastive masked autoencoder for learning visual representations (30 Oct 2022)
Shlok Kumar Mishra, Joshua Robinson, Huiwen Chang, David Jacobs, Aaron Sarna, Aaron Maschinot, Dilip Krishnan. [DiffM]

Granularity-aware Adaptation for Image Retrieval over Multiple Tasks (05 Oct 2022)
Jon Almazán, ByungSoo Ko, Geonmo Gu, Diane Larlus, Yannis Kalantidis. [ObjD, VLM]

Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods (30 Sep 2022)
Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelović, João Carreira, Olivier J. Hénaff.

On the Pros and Cons of Momentum Encoder in Self-Supervised Visual Representation Learning (11 Aug 2022)
T. Pham, Chaoning Zhang, Axi Niu, Kang Zhang, Chang D. Yoo.

OpenCon: Open-world Contrastive Learning (04 Aug 2022)
Yiyou Sun, Yixuan Li. [VLM, SSL, DRL]

Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt (14 Jun 2022)
Sören Mindermann, J. Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, ..., Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot, Sebastian Farquhar, Y. Gal.

CYBORGS: Contrastively Bootstrapping Object Representations by Grounding in Segmentation (17 Mar 2022)
Renhao Wang, Hang Zhao, Yang Gao. [SSL]

Object discovery and representation networks (16 Mar 2022)
Olivier J. Hénaff, Skanda Koppula, Evan Shelhamer, Daniel Zoran, Andrew Jaegle, Andrew Zisserman, João Carreira, Relja Arandjelović.

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning (22 Feb 2022)
Hao He, Kaiwen Zha, Dina Katabi. [AAML]

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? (13 Jan 2022)
Nenad Tomašev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrović. [SSL]

SLIP: Self-supervision meets Language-Image Pre-training (23 Dec 2021)
Norman Mu, Alexander Kirillov, David A. Wagner, Saining Xie. [VLM, CLIP]

Self-Supervised Models are Continual Learners (08 Dec 2021)
Enrico Fini, Victor G. Turrisi da Costa, Xavier Alameda-Pineda, Elisa Ricci, Alahari Karteek, Julien Mairal. [BDL, CLL, SSL]

A data-centric approach for improving ambiguous labels with combined semi-supervised classification and clustering (30 Jun 2021)
Lars Schmarje, M. Santarossa, Simon-Martin Schröder, Claudius Zelenka, R. Kiko, J. Stracke, N. Volkmann, Reinhard Koch.

Efficient Self-supervised Vision Transformers for Representation Learning (17 Jun 2021)
Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, Xiyang Dai, Lu Yuan, Jianfeng Gao. [ViT]

Poisoning and Backdooring Contrastive Learning (17 Jun 2021)
Nicholas Carlini, Andreas Terzis.

Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals (11 Feb 2021)
Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Luc Van Gool. [SSL]

PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding (21 Jul 2020)
Saining Xie, Jiatao Gu, Demi Guo, C. Qi, Leonidas J. Guibas, Or Litany. [3DPC]

Improved Baselines with Momentum Contrastive Learning (09 Mar 2020)
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He. [SSL]

A Mutual Information Maximization Perspective of Language Representation Learning (18 Oct 2019)
Lingpeng Kong, Cyprien de Masson d'Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama. [SSL]

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results (06 Mar 2017)
Antti Tarvainen, Harri Valpola. [OOD, MoMe]